Aim: Demonstrate how local (on-premise) large language models (LLMs) combined with retrieval-augmented generation (RAG) can automate and simplify requirements gathering for automotive specification documents, while keeping the company’s intellectual property and data private.
Approach: Build a system that pairs a local LLM with a question-and-answer interface and retrieval methods, applied to real or realistic automotive requirement documents.
Compare LLM-based requirement extraction and clarification against the manual process, using time, accuracy, and ambiguity as evaluation metrics.
Key findings: Using LLMs significantly reduces the time needed to obtain actionable requirements and, in many cases, reduces ambiguity, although human review is still needed to catch incorrect or fabricated information.
Practical contributions: Offers a clear setup (including the model, retrieval method, and evaluation tools), notes on how to implement it on-premises, and suggestions for integrating it into project management workflows.
Limitations: The approach addresses only requirements engineering, not the full project lifecycle.
Risks include hallucinated (fabricated) information, dependence on the quality of the source data, and the trade-off between model accuracy and the compute constraints of running models on-premises.
Introduction
The increasing complexity of automotive systems and strict safety and regulatory requirements make requirements engineering (RE) a critical but time-consuming process. Engineers must analyze large volumes of documents while ensuring data confidentiality, which limits the use of cloud-based AI tools due to privacy and intellectual property concerns.
This study explores the use of locally deployed Large Language Models (LLMs) combined with Retrieval-Augmented Generation (RAG) to automate RE tasks securely. Unlike cloud solutions, on-premises LLMs allow organizations to process sensitive data within their own infrastructure while benefiting from advanced language understanding.
Existing research shows that LLMs can improve tasks like requirement extraction, classification, and ambiguity detection, while RAG enhances accuracy by grounding responses in source documents. However, gaps remain in applying these technologies to secure, real-world automotive environments and comparing them with traditional methods.
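Ambiguity detection of the kind this line refers to is often bootstrapped with a simple weak-word lexicon before any LLM pass. The sketch below illustrates that baseline; the term list and function name are illustrative assumptions, not the lexicon used in the cited research.

```python
import re

# Weak words commonly flagged in requirements-quality checklists.
# This lexicon is illustrative only, not taken from the study.
WEAK_TERMS = {"appropriate", "adequate", "fast", "user-friendly", "as needed"}

def ambiguity_flags(requirement):
    """Return the vague terms found in a requirement statement."""
    words = re.findall(r"[a-z-]+", requirement.lower())
    text = " ".join(words)
    return sorted(t for t in WEAK_TERMS if t in text)

flags = ambiguity_flags("The system shall respond fast and provide adequate feedback.")
print(flags)  # → ['adequate', 'fast']
```

A requirement that triggers such flags would then be routed to the LLM for a suggested unambiguous rewrite, keeping an engineer in the loop to accept or reject it.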
The proposed approach integrates LLMs and RAG into a system designed to extract, clarify, and manage automotive requirements more efficiently. It is evaluated using metrics such as time savings, accuracy, and ambiguity reduction, and compared with manual processes through controlled experiments.
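As a rough illustration of the retrieval half of such a pipeline, the sketch below ranks specification chunks by term overlap with an engineer's question and assembles a grounded prompt for a local model. The example requirements and function names are hypothetical, and a production system would use embedding-based search rather than bag-of-words cosine similarity.

```python
import math
from collections import Counter

def tokenize(text):
    return [t.lower().strip(".,;:?") for t in text.split()]

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, chunks, k=2):
    """Rank specification chunks by term overlap with the query."""
    qv = Counter(tokenize(query))
    return sorted(chunks, key=lambda c: cosine(qv, Counter(tokenize(c))), reverse=True)[:k]

def build_prompt(query, context):
    """Ground the local LLM's answer in the retrieved source text."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

chunks = [
    "The braking system shall engage within 150 ms of pedal input.",
    "Cabin lighting should dim gradually at ignition off.",
    "The battery management system shall log cell temperature every second.",
]
top = retrieve("How fast must the braking system respond?", chunks)
print(build_prompt("How fast must the braking system respond?", top))
```

Because both retrieval and generation run on local infrastructure, the specification text never leaves the organization's network, which is the privacy property the study targets.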
The study is grounded in theories of requirements engineering, transformer-based language models, information retrieval (RAG), data governance, and human-AI collaboration. It emphasizes a human-in-the-loop approach where AI supports, rather than replaces, engineers.
Overall, the research aims to develop a practical, secure, and efficient AI-assisted solution for automotive requirements engineering, addressing both performance and privacy challenges.
Conclusion
This study demonstrates the use of a locally deployed Large Language Model (LLM) combined with a Retrieval-Augmented Generation (RAG) system to support requirements engineering in the automotive sector. The results show that this approach can significantly improve the efficiency and quality of handling automotive specification documents, while preserving data privacy and protecting intellectual property by running everything on-site.
A comparison of the manual process versus the LLM+RAG approach follows.